Asher Ross - Supervising Producer
Markus Zakaria - Audio Producer and Sound Designer
Rafaela Siewert - Associate Podcast Producer
- Paul Scharre, Senior Fellow and Director of the Technology and National Security Program, Center for a New American Security
- Toby Walsh, Professor of Artificial Intelligence at The University of New South Wales
- Mary Wareham, Advocacy Director, Arms Division at Human Rights Watch
Transcript
So, if I say the words killer robot, you might imagine something that sounds like this:
Terminator: I am a friend of Sarah Connor. Where is she? I’ll be back.
The Terminator, a walking, talking, fully autonomous machine designed to hunt and kill humans. Scary stuff. But, according to the experts, the killer robots we really need to worry about might sound a bit more like this: [Buzzing sound].
No, not genetically modified killer bees. A swarm of tiny, lethal drones. Right now, drones are guided remotely by humans. But as the technology develops, so does the ability to forgo human guidance altogether.
FOX NEWS: 0:28: The Pentagon investing billions of dollars to develop autonomous weapons, machines that could one day kill on their own.
https://youtu.be/IAvbkcSLlYQ?t=28
FRANCE 24: 1:33: This as several countries come closer each year to creating the real thing.
https://youtu.be/cA6YG2v3Glk?t=93
WOCHIT NEWS: 0:00: Tesla and SpaceX CEO, Elon Musk, is reportedly seeking a global ban on lethal autonomous weapons.
https://www.youtube.com/watch?v=OslgCLRLUJE
I’m Gabrielle Sierra and this is Why It Matters. Today, grappling with the reality of killer robots.
SIERRA: So what is a lethal autonomous weapon?
Paul SCHARRE: A lethal autonomous weapon is quite simply a weapon that makes its own decisions about whom to kill on the battlefield. It would be a weapon that’s designed by people, built by people, put into operation by people. But once released, can search over the battlefield for targets and then, based on its programming, can decide which targets to attack all by itself, without any further human involvement.
This is Paul Scharre. He’s a senior fellow at the Center for a New American Security. He’s also the author of Army of None: Autonomous Weapons and the Future of War. Paul served as an Army Ranger in both Afghanistan and Iraq.
SIERRA: So what’s the difference between that and a drone?
SCHARRE: Well, drones today, which are widely used around the globe, are largely remote-controlled. So there are people behind drones, people flying them, people looking down the cameras, people making decisions about releasing a missile and deciding who to kill with drones today. The technology is taking us towards a place where with each generation of military robotics, there is more and more automation. So much like we see with automobiles, where each generation of cars has incrementally more autonomy.
Features like intelligent cruise control, automatic braking, self-parking. We’re seeing the same thing with military robots, that each generation has more autonomy in automated takeoff and landing or maneuvering by itself. People are still in charge of weapons today, but the technology’s going to make it possible to hand over control to machines.
There are so many good movies about this (The Matrix, Ex Machina, 2001: A Space Odyssey) that we tend to imagine the problem as machines turning against their human creators.
SCHARRE: Yeah, we’re certainly not talking about science-fiction visions of robots building robots and launching them on their own. That’s not in the cards anytime in the near future. So that’s the good news. At some level a human is always involved. What we’re really talking about is changing where the human is involved.
SIERRA: But isn’t the idea that your car helping you park itself, or a drone being able to make these decisions, isn’t it based on something that we’re thinking of as progress?
SCHARRE: Well, certainly one argument for autonomous weapons is that just like self-driving cars will someday reduce deaths on the roads and make roads safer, maybe machines can do the same in war. Maybe they could reduce accidents, they could better discriminate between civilians and enemy combatants. And they could reduce civilian harm in war. You know, the main argument is that people don’t always do a great job in war. People make mistakes. There are accidents. People commit war crimes. And perhaps machines can do better. So for example, some decisions have a correct answer. Is this person in a combat environment holding a rifle in their hands, or are they holding a rake?
SIERRA: Right.
SCHARRE: And we probably could someday build machines to do that more reliably than people.
But, of course, it isn’t always that simple.
SCHARRE: Let’s say we’ve decided this person’s definitely holding a rifle. They could be friendly forces. They could be a civilian who’s armed, who’s protecting their land. Even if you’ve identified that they’re an enemy combatant, is it the right call to shoot them in this instance? There might be situations where it’s not very tactical, it’s going to expose you to the enemy, so it’s a bad idea.
I served as an Army Ranger on the ground in Iraq and Afghanistan, did a number of tours overseas, and I think one of the things that I took away from that was just the messiness of combat, the uncertainty, the fog of war. And I saw a lot of situations where it’s just hard for me to imagine a machine making the right call.
I’ll give you one example that kind of stuck with me. There was an incident in Afghanistan where I was part of a reconnaissance unit up along the Afghanistan-Pakistan border, and we saw someone approaching us, and we didn’t know whether this person was a goat herder or an enemy combatant that was coming to ambush us. So I went to get a better eye on him and try to discern, you know, what kind of situation we were dealing with. And I moved into a position where I could see him and he couldn’t see me. And I saw him sitting on the edge of a cliff looking out over the valley, and I could hear him talking, but I didn’t see who he was talking to. Maybe there was a group of fighters that he’d linked up with, or maybe he was talking to himself or talking to his goats. Or maybe he was talking over the radio and was reporting back information about us.
So I settled into a position where I could watch him and, if need be, shoot him if I had to, if he turned out to be an enemy fighter. And after a couple of minutes, he started singing. And that struck me as a really strange thing to be doing if he was an enemy fighter, right?
SIERRA: Right.
SCHARRE: Like singing out information about us over the radio. It seemed really odd. And that calmed my fears, and it seemed to me like, well, with this addition of just a tiny bit of information, that changed the whole way in which I was looking at this situation. And he was enjoying what was, in fact, a beautiful afternoon, looking out over the Afghan countryside. And so I watched him for a few more minutes, and then I left and felt very comfortable. And it seems to me that it’s real hard for me to imagine how a machine could make that right call in that situation.
Toby WALSH: They call it the fog of war for very good reason.
This is Toby Walsh. He’s one of the world’s leading experts on artificial intelligence and a member of the Campaign to Stop Killer Robots. He spoke to us via Skype from Germany, where he teaches at the Technical University of Berlin.
WALSH: It is actually very difficult to work out what’s going on. And we have plentiful evidence that the sorts of computer vision systems we have today are very easily fooled. We have lots of examples of how you can change a single pixel and spoof a computer vision system. And of course, let’s not forget: people are trying to fool you. And it’s quite easy, today at least, to fool these systems.
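Walsh is describing what AI researchers call adversarial examples: tiny, deliberately chosen changes to an image that a person would never notice but that flip a model’s answer. The Python sketch below is only a toy illustration of that idea. It uses a made-up linear “classifier” with random weights, not any real vision or targeting system, and the rifle/rake labels simply echo Paul’s earlier example.

```python
import numpy as np

# Toy "image": 784 pixel values in [0, 1], standing in for a 28x28 photo.
rng = np.random.default_rng(0)
image = rng.random(28 * 28)

# Toy linear classifier with random weights, purely for illustration.
# A positive score means "rifle", a non-positive score means "rake".
weights = rng.normal(size=28 * 28)
bias = -weights @ image + 0.05  # chosen so the clean image scores just above zero

def classify(pixels):
    return "rifle" if weights @ pixels + bias > 0 else "rake"

print(classify(image))  # -> rifle

# Adversarial perturbation: nudge every pixel by at most 0.01 in the direction
# that lowers the score (a sign-gradient step, the idea behind FGSM-style attacks).
epsilon = 0.01
perturbed = np.clip(image - epsilon * np.sign(weights), 0.0, 1.0)

print(np.abs(perturbed - image).max())  # no pixel changes by more than 0.01
print(classify(perturbed))              # -> rake, despite a near-identical image
```

Attacks on real deep networks rest on the same principle, searching for a perturbation the model is sensitive to, which is why a system that looks accurate in testing can still be fooled by a motivated adversary.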
There seems to be consensus that killer robots, as they might exist today, could make a lot of mistakes identifying and selecting targets. Just think of every time you repeatedly yell instructions at your Alexa or when Siri randomly asks you a question. So it’s no surprise that AI isn’t even close to making the type of decision that Paul made that day in Afghanistan.
WALSH: Equally, I can imagine a future in 20 or 30 years’ time when perception is much better, and perhaps the computers are making fewer mistakes than humans. But then I start to have other concerns. Then we have weapons that it will be impossible to defend yourself against. It would change the very nature of battle. It would change the triggers for war. It will perhaps accelerate us into situations where we fight wars that we wouldn’t have fought otherwise. And ultimately, I believe, like many of my colleagues, that these would become weapons of mass destruction. Previously, when you wanted to fight a war, you had to have an army. You had to persuade an army of people to do your evil intent. But now, with these sorts of weapons, you wouldn’t need an army. You would need one programmer. It would allow you to scale war in the way that only other weapons of mass destruction—like nuclear weapons, like chemical weapons, like biological weapons—allow: a limited number of people can cause a huge amount of harm.
At this point, it only feels natural to ask, why on earth would anyone want to create autonomous weapons?
SCHARRE: Now, it’s worth pointing out that there is also a military advantage dimension to this that’s leading militaries to certainly invest in more automation and artificial intelligence. It’s this idea that machines can react faster than humans, and that could lead to a compelling advantage on the battlefield.
Computers, and the speed they provide, have been part of war since they were invented. Warships are already using AI to shoot down incoming missiles. But there’s a flip side.
SCHARRE: Though there’s a problem with that, which is that if the machines make a mistake, they could be making mistakes at machine speed.
SIERRA: Right, so fast that we can’t come in and fix it, we can’t interrupt.
SCHARRE: Right, exactly. There’s value in finding ways to accelerate the decision-making of your military forces so they can operate faster and adapt to a changing battlefield environment faster than the adversary. But, you don’t want to lose control of them.
WALSH: We see what happens when we put complex, interacting, autonomous machines together in an adversarial setting. It’s called the stock market.
Yes, the stock market. Okay, bear with us here, because this isn’t as big of a jump as it might seem. Both Toby and Paul emphasized that the stock market has shown us how artificial intelligence can wreak havoc so fast that humans have no chance to intervene.
WALSH: And we frequently see cases of flash crashes, where the stock market gets itself into unexpected feedback loops and does things that we don’t intend.
SCHARRE: So the big flash crash in 2010 was an incident where, through a combination of factors, the stock market basically took a sudden plunge within a few minutes, driven in part by machines operating faster than people could. The combination of algorithms making decisions about trading, high-frequency trading, acts almost like a wildfire that explodes on the stock market, causing machines to engage in these very fast and somewhat irrational behaviors, and it can cause the market to take a sudden dive.
WALSH: Now, with the stock market, we can and we do unwind those transactions and say, I’m very sorry, something went wrong, the algorithms got themselves into some unexpected loops, and we unwind all the unintended behaviors.
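To make the feedback-loop point concrete, here is a minimal, made-up simulation in Python. The two “trading agents,” the price-impact rule, and every number in it are illustrative assumptions, not a model of the 2010 flash crash; the point is only that simple automated rules reacting to one another can drive a runaway spiral through thousands of decisions before any human could step in.

```python
# A minimal, made-up simulation of how simple interacting trading rules can
# feed on each other and produce a flash-crash-style spiral. All numbers,
# agents, and the price-impact rule are illustrative assumptions.

price = 100.0
history = [price]

def momentum_seller(prices):
    """Sells 50 shares whenever the price fell on the last tick (a crude stop-loss)."""
    return 50 if prices[-1] < prices[-2] else 0

def trend_seller(prices):
    """Sells 100 shares whenever the price fell more than 0.1% over the last two ticks."""
    return 100 if prices[-1] < 0.999 * prices[-3] else 0

# Each tick stands in for a fraction of a second. A human supervisor checking
# the market every few seconds would watch thousands of ticks go by unexamined.
for tick in range(1, 2001):
    sold = 0
    if len(history) >= 3:
        sold = momentum_seller(history) + trend_seller(history)
    # Crude price impact: every 100 shares sold pushes the price down 0.1%.
    price *= 1 - 0.001 * (sold / 100)
    # A single small outside nudge is enough to start the spiral.
    if tick == 2:
        price *= 0.999
    history.append(price)

print(f"start: {history[0]:.2f}   after 2,000 ticks: {history[-1]:.2f}")
```

A tiny initial dip is enough: each agent’s selling deepens the fall, which triggers more selling on the next tick. Markets can pause trading and unwind the damage afterward; as Scharre notes next, war has no such referee.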
When flash crashes occur, banks and regulators hit the pause button, go through the records, and make sure everyone gets their money back.
SCHARRE: What’s interesting about this is there’s no equivalent in warfare. So there’s no referee to call timeout anymore if you have a flash war, if you had machines, whether in physical space or cyberspace, start interacting in a way that got out of control, that people didn’t want.
SIERRA: It would all happen too fast.
WALSH: If it’s robots facing each other in a contested place, like the DMZ between North and South Korea, unfortunately, we won’t be able to unwind what’s happened. People will be killed. We might be at war with North Korea. And that would be a very unfortunate circumstance to have, that we end up fighting a war that we didn’t intend to fight. It will then change the scale and the speed, possibly the duration of warfare itself.
SIERRA: I mean, you can’t always be like, it was the robot that did it, not me. We’re cool.
SCHARRE: Right. Well, that’s the problem, right? Even if you said that, would people believe you? Would it matter, right? I mean, if a robot made some decision and killed people, would a country’s population even care?
SIERRA: Who’s accountable? Or responsible?
SCHARRE: Right. And maybe the political pressures would be so great that a nation’s leaders would have to retaliate.
Alright. So let’s take stock. Militaries across the world are investing in this technology. Not only to gain an advantage in war but because it could help save the lives of soldiers and civilians. And yet the technology could get things wrong, and lead to wars that spiral out of control in seconds. And that could risk the lives of soldiers and civilians. Some experts argue that the dangers and the unknowns are so serious that humanity should ban autonomous weapons forever.
One of these experts is Mary Wareham. She’s the advocacy director in the Arms Division at Human Rights Watch. Mary coordinates something called the Campaign to Stop Killer Robots, an entire organization dedicated to a global ban.
SIERRA: Can you tell me a little bit about what the Campaign to Stop Killer Robots is?
Mary WAREHAM: Back in 2013, about eight nongovernmental organizations, including my own, Human Rights Watch, decided to co-found the Campaign to Stop Killer Robots. The campaign calls for a prohibition on the development, production, and use of fully autonomous weapons. The governments call these lethal autonomous weapon systems. We decided to call it the Campaign to Stop Killer Robots because that’s, I think, a bit more accessible to the public, and it’s a conversation starter.
SIERRA: For sure.
WAREHAM: And it certainly has started the conversation. I’m often asked to paint the dystopian scenario. You know, what’s the worst-case future that we face if you fail? And really, I don’t like to think about it. I like to think about how we’re succeeding. These are the people who prohibited landmines and cluster munitions, who have worked to create the Arms Trade Treaty. We, in civil society, know what we’re doing, and we know the power that can come when like-minded countries work together with us and with the Red Cross. It is possible to create new law.
SIERRA: So there’s a path for this, to keep developing the tech that we feel is important to move forward as a society while also drawing a line.
WAREHAM: This is how chemical weapons were dealt with in the past. That didn’t eradicate the field of chemistry. Or prevent, you know, R&D in that sector. But they prevented the use of chemicals as a weapon in warfare. So, yes, it’s possible.
It isn’t just organizations like the Campaign to Stop Killer Robots that are speaking out against lethal autonomous weapons.
WAREHAM: Pakistan was the first country to come out against lethal autonomous weapon systems. And basically, it said, as the victim of armed drone strikes, if this is what’s coming next, we want no part of it. They have become one of the most ardent proponents of the call for a ban on fully autonomous weapons.
Other countries, we’re up to twenty-nine states now that are calling for a ban on killer robots. Most of them are from Latin America, from Africa, some from Asia, many from the Middle East. Many of them are countries that have been through terrible conflicts and want to prevent that from happening again.
Mary knows a thing or two about banning weapons. In 1997, she was working with the International Campaign to Ban Landmines when it won the Nobel Peace Prize.
SIERRA: Does your experience working to ban landmines give you hope about banning lethal autonomous weapons?
WAREHAM: Certainly. We were called utopian and hopelessly unrealistic at the beginning of the Campaign to Ban Landmines. We set our sights on a weapon that had been in widespread use, that had proliferated around the world, that was in the stockpiles of more than eighty countries. But once we had the framework of the Mine Ban Treaty in 1997, 122 countries signed up to it in Ottawa that year. It’s truly a lifesaving treaty. And this is a similar thing that we’re trying to do here on killer robots. We’re trying to stigmatize the removal of meaningful human control from the use of force. This is an unprecedented opportunity to kind of prevent a disaster before it occurs. And I think that political leaders are starting to wake up to that fact, and realize that if they do something bold here they will be remembered for it.
WALSH: We’re not going to be able to keep the technology itself from being invented, because it’s pretty much the same technology that goes into autonomous cars or autonomous drones. But we can ensure that arms companies don’t sell the technologies, that they don’t turn up in the arms bazaars and black markets of the world, and that they don’t fall into the hands of all the wrong people. And that’s another risk that we haven’t talked about: if these weapons are widely sold by arms companies, they will fall into the hands of terrorists and rogue states. And they are the perfect weapons for terrorists and rogue states to use against civilians and our own populations.
We can all agree we don’t want a world in which terrorists have autonomous drones. But some believe this may happen whether or not powerful states develop them.
SCHARRE: The technology is already so diffuse that it’s well within the ability of nonstate actors, or individuals even, to build crude autonomous weapons on their own.
SIERRA: Wow.
SCHARRE: We’ve seen, yeah, it’s terrifying.
SIERRA: Yeah, truly.
SCHARRE: You know, one of the things I looked at for my book, Army of None, was how easy would it be for somebody to build a do-it-yourself autonomous weapon in their garage? And the answer is, it is terrifyingly possible. All of the basic elements exist and are pretty widely available today. You’d need a platform. And you can buy a drone online for a few hundred bucks that’s pretty autonomous. That could do autonomous takeoff and landing. It could fly a GPS-programmed route all on its own. You’d need to put a weapon on it. And you’ve seen individuals do this here in the U.S. A couple of years ago there was a teenager up in Connecticut who posted a video on YouTube of a flamethrower that he put on a drone, and he basted a turkey for Thanksgiving with it. What’s hard is not making an autonomous weapon. It’s making an autonomous weapon that is discriminate, that operates in a manner that’s lawful. And for a terrorist, the fact that an autonomous weapon might kill the wrong people is a feature, not a bug.
It may be that the world can’t avoid crude autonomous weapons in the hands of terrorists. But this doesn’t deter those who want to regulate killer robots. And that’s because the terrible but limited threat of homemade drones pales in comparison to large-scale automated conflict between powerful nations.
WALSH: I am pretty convinced that we will ultimately see these weapons and decide that they are horrible, a horrific, terrible, terrifying way to fight war, and we’ll ban them. I’m actually very confident. What I’m not confident about is that we will have the courage and conviction and foresight to do that in advance of them being used against civilian populations. The sad history of military technology is that we have normally, in most instances, had to see weapons being used in anger before we’ve had the intelligence to ban them.
SCHARRE: When you look at historical attempts to regulate and ban weapons, the most successful efforts are ones that have very clear bright lines, that say, you know, it’s unacceptable in all situations to use, for example, chemical weapons. And rules that are fuzzy or that have some gray area don’t fare as well in war. So for example, early in World War II, there were efforts to constrain aerial bombardment to military targets only and not attack cities. That broke down over time as there were mistakes, as bombers, you know, lost their way in the dark. There was one incident where a German bomber strayed off course in the night and bombed central London by mistake, which caused Churchill to order the British air force to retaliate and bomb Berlin. After that, the gloves came off, and Hitler launched the London Blitz. So it’s much harder to achieve effective restraint among militaries, even when both sides desire restraint, when the rules aren’t very clear or there is some risk of militaries accidentally crossing the line. And that’s a real problem for autonomous weapons.
SIERRA: But if they had been using an autonomous weapon, it wouldn’t have gotten lost and accidentally bombed London. Isn’t that—
SCHARRE: Well, and that’s—
SIERRA: —the argument?
SCHARRE: That’s a great point. I mean, that’s a great point, right, which is to say that this advantage of precision and reliability is one of the reasons why autonomous weapons might do better in some settings.
The ambiguity around lethal autonomous weapons runs deep. Military specialists and many policymakers feel that they have a duty to develop them. But the consensus bends far the other way when it comes to scientists and engineers. In 2015, over 24,000 AI specialists signed a pledge not to participate in the development of autonomous weapons. Signatories included Stephen Hawking, Elon Musk, and Steve Wozniak.
WALSH: It’s very similar, I suspect, to the number of scientists who are absolutely convinced that climate change is human-made and an existential risk to our society today, versus the few who are still willing to entertain other causes of climate change or to argue that it is not an existential risk.
SIERRA: So what are your most serious near-term concerns?
SCHARRE: We’re in a place today where AI technology has exploded over the past couple years and we are as a society grappling with its consequences in a whole range of industries—in transportation, in healthcare, in finance, and others.
A couple challenges in that conversation are that one, the pace of technology is moving much, much faster than the pace of diplomacy. Each year we’re seeing just huge leaps in what AI and machine learning are able to do. And we’re making progress diplomatically in really understanding the issue, but it’s just moving at a much, much, much slower pace.
WALSH: It takes us to a very difficult moral place to hand over the decision to machines as to who lives and who dies. You can’t hold machines accountable. They are not moral beings. They don’t have our empathy. They, themselves, are not risking their lives. There are plentiful, I think, moral arguments as to why we can’t give those sorts of decisions to machines, ultimately because we can’t hold them accountable.
All of this had me thinking about Paul's story. Of the singing goat herder in Afghanistan. And a single life-or-death decision made by a human being.
SCHARRE: In the big picture of the war, you know, did that decision really matter? No. That wasn’t, you know, the outcome of the Afghanistan War did not hinge upon whether I made the right call on that instance.
SIERRA: Right.
SCHARRE: But it sure mattered to that man.
SIERRA: And to you.
SCHARRE: And to me. I mean, no matter what happened, I had to live with that decision that I was going to make, and I wanted to make the right call there. And that gets to another important point about autonomous weapons, which is to say that there’s a lot of value in keeping people morally engaged in these decisions, not just so that people make the right call but so that people understand the stakes.
Even if machines could make all the right decisions, machines aren’t going to feel the moral impact of those decisions. And it’s a challenging thing to talk about, because having people invested in these decisions has a cost. People suffer moral injury. They suffer PTSD. There’s a real cost to that, and it’s unfair that as a society, as a democratic nation, we make a decision to go to war and then ask a very small subset of the population to bear that moral burden for us, to make these decisions and then to have to live with them. But it’s also worth asking: what would it mean for us as a society if nobody bore that burden? If no one slept uneasy at night, and no one went to bed and said, well, I was responsible for what happened on the battlefield today? I think that would be a problem.
Killer robots - pretty heavy stuff. It’s a lot to think about, so if you want to learn more, visit us at CFR.org/Whyitmatters, where we have a page for this episode that includes show notes, a transcript, and a bunch of articles and resources where you can take a deeper dive.
Feel free to reach out to us and say hey at [email protected]. And please subscribe on Apple Podcasts, Spotify, Stitcher, or wherever you get your audio. If you like the show, please leave us a review, give us some stars, and talk about us to your friends. Word of mouth is super important and we appreciate it.
Why It Matters is a production of the Council on Foreign Relations. The show is created and produced by Jeremy Sherlick, Asher Ross, and me, Gabrielle Sierra. Our sound designer is Markus Zakaria. Robert McMahon is our Managing Editor, and Doug Halsey is our Chief Digital Officer. Christian Wolan is our Product Manager. Our researchers are Menyae Christopher and Rafaela Siewert. Original music was composed by Ceiri Torjussen. Special thanks to Richard Haass and Jeff Reinke. For Why It Matters, this is Gabrielle Sierra, signing off. See you next time!
Show Notes
Lethal autonomous weapons, sometimes called killer robots, are military weapons that can select and attack targets without further human involvement. Some see these devices as an opportunity, while others consider their development to be a major threat. This episode lays out the risks they pose and the controversy around them.
From CFR
“Laying Down the LAWS: Strategizing Autonomous Weapons Governance,” Taylor Sullivan
“The Pentagon Plans for Autonomous Systems,” Micah Zenko
Read More
“Coming Soon to a Battlefield: Robots That Can Kill,” Atlantic
“The Pentagon’s ‘Terminator Conundrum’: Robots That Could Kill on Their Own,” New York Times
“China and the U.S. Are Fighting a Major Battle Over Killer Robots and the Future of AI,” TIME
“UK, US and Russia among those opposing killer robot ban,” Guardian
“Tech leaders: Killer robots would be ‘dangerously destabilizing’ force in the world,” Washington Post
“Death by algorithm: the age of killer robots is closer than you think,” Vox
Watch or Listen
“The Dawn of Killer Robots,” Motherboard